So welcome to the last week of AI in 2022.
We're still doing logic, and we will be for a while.
What I would like to show you today is some of the things you need to do, or can do, to make a very simple procedure like the DPLL procedure for propositional satisfiability become competitive.
It's partly a matter of engineering: implementing things well, using efficient data structures and so on, so that you can deal with large inputs that typically have a lot of repetition.
You want to build in sharing and all of those things.
One of the things that plays almost no role is using low-level systems programming languages like C. The factor of three or so that you can gain by using C over something else is negligible.
The really big speedups come from things like sharing, doing things once instead of hundreds of thousands of times, and finding structure, using structure, learning things.
We want to look at a technique called clause learning today, which is weird and wonderful and a bit of a mind-twister, but turns out to be one of the secret ingredients that make DPLL so effective.
DPLL in and of itself is extremely simple.
We have essentially two rules.
One is unit propagation.
We looked at it.
If we have a unit clause, as we do here with R true, then we propagate the unit, which means we simplify the other clauses accordingly.
Remember, we're in a test calculus, so we're looking for empty clauses, and a clause containing a literal that is satisfied anyway can never become empty, so we can leave that whole clause out.
Even better, the opposite literal is now false, so in any clause that contains the opposite literal, that literal can also be left out; it can never help to satisfy the clause.
We can simplify the clauses that way, and that's what we're doing in unit propagation.
In this case we make R true, then we can simplify the clauses, and if we're lucky, as this example shows you, that propagation gives us new units, and then we're done.
Not here, though.
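To make those two simplification steps concrete, here is a minimal sketch in Python; it assumes clauses are represented as sets of integer literals, with a negative number standing for a negated variable, and the helper names are mine, not from the lecture.

    # Minimal sketch of unit propagation, assuming clauses are sets of integer
    # literals (3 means "variable 3 is true", -3 means "variable 3 is false").
    # These helper names are illustrative only.

    def simplify(clauses, literal):
        """Make `literal` true: drop satisfied clauses, strip the opposite literal."""
        result = []
        for clause in clauses:
            if literal in clause:
                continue                        # clause is satisfied, can never become empty
            result.append(clause - {-literal})  # the opposite literal is false, leave it out
        return result

    def unit_propagate(clauses, assignment):
        """Keep picking unit clauses and simplifying until none are left."""
        while True:
            if any(len(c) == 0 for c in clauses):
                return clauses, assignment      # empty clause derived: this branch fails
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                return clauses, assignment      # nothing more to propagate
            literal = units[0]
            assignment[abs(literal)] = literal > 0
            clauses = simplify(clauses, literal)

On a clause set with the unit R, this drops every clause containing R, deletes the opposite literal elsewhere, and, if that produces new units, propagates them in turn.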
If we can't propagate, then what we have to do is split.
We have a don't-know decision: do we want to make P true or false?
We don't know, so we have to search over both possibilities systematically.
Essentially, what we're getting is backtracking search.
It looks, you could say, exactly like the backtracking search for CSPs.
The nice thing is that many of the intuitions and so on also apply.
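As a sketch of how that don't-know decision turns into backtracking search, one could wrap unit propagation like this; it reuses the illustrative helpers from the sketch above and is plain DPLL, without the clause learning we get to later.

    # Sketch of the splitting rule on top of unit propagation, reusing the
    # illustrative simplify() and unit_propagate() from above.

    def dpll(clauses, assignment=None):
        clauses, assignment = unit_propagate(clauses, dict(assignment or {}))
        if any(len(c) == 0 for c in clauses):
            return None                         # empty clause: this branch fails, backtrack
        if not clauses:
            return assignment                   # no clauses left: satisfying assignment found
        # Don't-know decision: pick some variable and try it true, then false.
        variable = abs(next(iter(clauses[0])))
        for literal in (variable, -variable):
            result = dpll(simplify(clauses, literal),
                          {**assignment, variable: literal > 0})
            if result is not None:
                return result
        return None                             # both branches failed

For instance, dpll([{1, 2}, {-1, 2}, {1, -2}, {-1, -2}]) has no unit to propagate, so it splits on a variable, fails on both branches, and returns None for "unsatisfiable".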
We're going to look at examples, so we have this little example, but we also have this one.
If you look at it, it is made by adding two clauses, one with X1 true up to X100 true, and another one with X1 false up to X100 false, and they're there to essentially get DPLL into a huge search tree.
Two to the power of one hundred leaves down here, an astronomically large number, of which I'm only showing you four, but there's a lot in between.
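Just to make the shape of that clause set concrete, here is how one might construct it in the representation used in the sketches above; the small unsatisfiable core is a made-up placeholder, not the clause set from the slide.

    # Illustrative construction of the blow-up example: two long clauses over
    # 100 fresh variables (numbered 101..200 here) added to an unsatisfiable
    # core. The core below is a placeholder, not the example from the slide.
    core = [{1, 2}, {1, -2}, {-1, 2}, {-1, -2}]   # unsatisfiable over variables 1 and 2
    all_true  = set(range(101, 201))              # X1 or X2 or ... or X100
    all_false = {-v for v in range(101, 201)}     # not X1 or not X2 or ... or not X100
    blown_up  = core + [all_true, all_false]
    # A DPLL that happens to branch on the irrelevant X variables first can end up
    # rediscovering the same failed refutation of the core in up to 2**100 branches.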
That's the worst thing that can happen, and we know, convince yourself of it, that these two clauses are not going to contribute anything.
The whole tree here is actually going to fail.
We're basically making a honey pot here for the procedure, because satisfiable branches